Artificial Intelligence & ChatGPT: Power, Potential and Problems for Nonprofit Copywriting and Beyond
“What do you think about ChatGPT?”
Seven months ago, that question would have gotten you mostly quizzical stares. Fast-forward to now, and ChatGPT is the talk of the town. If you haven’t already, better start paying attention.
These days it seems as if EVERYONE is thinking about, talking about, writing about – and fretting over – artificial intelligence. The topic’s been a slow burn for years, but the conversation caught real fire last November with OpenAI’s release of ChatGPT.
It’s only getting hotter now as AI technology spills into the mainstream and practical questions bubble up: What are the immediate applications in my day-to-day work? What about the future? How will all this AI technology impact my job? Will I even have a job?
People are also wondering what this means for society: Will AI usher in a new age of enlightenment? How will it change health care, medicine, energy, communications, conservation, war, weapons, and countless other areas? How will it change my life?
And finally, the philosophical – even existential – questions: Is this the beginning of the end of humanity? Will AI tools be put to nefarious use by bad actors and governments across the world? Or is AI itself an even greater danger – are robots going to take over?
If you haven’t kept up on your tech reading lately, those last questions might seem hyperbolic, even unhinged. But they’re not. Why do I think that? Mostly because the people raising them are more qualified than anyone else on the planet to express their concerns about what might be coming. Most pointedly, you may have heard that a San Francisco-based nonprofit called the Center for AI Safety recently issued a statement related to all this.
The statement was noteworthy for two reasons: First, because of the bright lights – some 600 of them – who put their names to it. The signers include Google DeepMind CEO Demis Hassabis, OpenAI CEO Sam Altman, and Geoffrey Hinton and Yoshua Bengio, the last two of whom have been honored with the Turing Award, the tech world’s equivalent of the Nobel Prize. The second reason the statement is worth paying attention to is its tone and length: It’s ominous – and all of 22 words long.
That’s not the executive summary; it’s the whole statement. This is curious because AI is such a massive topic. The document could easily have been 2,200 words. But the signers opted for brevity – a great marketing strategy when you want to make sure your take-home message doesn’t get lost in a sea of words. And who better than this digital engineering crowd – the guys responsible for the calluses on our scrolling thumbs – to know our attention spans aren’t what they used to be?
AI, an Existential Risk… Seriously?
If you haven’t heard their statement, here it is in all its hair-raising glory: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Cocktail, anyone?
It feels a little like we’re living in the opening chapters of an Arthur C. Clarke sci-fi novel – the part right before the s#%@ hits the fan. It’s not certain where this is all going, but it seems like it’s time to buckle up, take courage, and resist the creeping urge to keep your head blissfully stuck in the sand.
As fast as everything is happening, and as much, er, fun as it is to speculate about a robot takeover, the world keeps turning and most of us still need to earn a paycheck. So we may as well get the most out of the robots before we end up working for them. I think/hope it’ll be at least a few years before that happens, and in all seriousness, I believe creatives who grab the proverbial AI bull by its horns and focus on how to steer it will be surprised at how helpful it can be. Besides, to my mind, you don’t have much choice. If you keep your head buried or try to run and hide, this bull is going to gore you.
So with those brief philosophic ponderings about AI out of the way, let me turn to more pragmatic musings from my perspective as a veteran creative director, copywriter, and communicator.
In those roles, I’m curious about how marketers can tap into burgeoning AI technologies to help our nonprofit clients bond with their supporters and raise funds more efficiently for their good works.
ChatGPT – The Good & the Bad
It is truly an exciting time to be a communicator in our nonprofit world. AI tools – specifically ChatGPT – are arguably the biggest gamechanger ever to hit this marketplace. I have high hopes for what ChatGPT can achieve, but I also have concerns about how it could negatively impact fundraising and communications for nonprofit orgs.
Let’s start with the worries.
First concern: Using ChatGPT could lead to increased dehumanization and generalization in fundraising. As every copywriting pro knows, we have an extensive control panel of variables at our disposal – theme, tone, sentence length, letter length, point of view, offers, word choice, headline use, underlining, bolding, italicizing, and many, many more. There’s a whole other set of design techniques to use as well. When creative directors, writers, and graphic artists sit down to create materials, we make choices about how to use these variables to engage donors and move them to action.
Choosing among these variables and fine-tuning them for different demographics – segmenting and personalizing for donors with distinct backgrounds, interests, and motivations – is precise business. Broad-brush, one-size-fits-all efforts aren’t good enough. Donors expect – and deserve – more before they’ll part with their hard-earned dollars. Send them clunky, off-brand communications that don’t speak to them, and what you end up saying without words is, “I really don’t know you or what you care about, friend, and frankly, you’re not important enough for us to do better.”
Let ChatGPT lead the way in your communications and fundraising efforts and that could end up being your donors’ experience. This isn’t to say ChatGPT can’t play a key role in creating your communications and fundraising materials; it’s just better to see it as a versatile utility player on the team – a potent tool for automating some aspects of your work but in need of a capable human boss. Ultimately, the success of the collaboration depends on how skilled the boss is, first, at prompting our robot friend and, second, at shaping and finishing the unpolished gems our faithful droid collaborator throws out on the page.
Personally, at this point, after a fair amount of experimentation, I have no illusion that I can input info about an org, their campaign, etc., press a button, and expect ChatGPT to deliver fundraising gold. If my prompts are on point, subtle, and sophisticated, I’ll get quality bits and pieces, interesting turns of phrase, and lots of good words to use. In short, good-quality raw material.
I’m fine with that, and I think most writers would agree that just having good raw material on the page is valuable and a surprising comfort. (Fellow writers will need no further explanation. If you’re not one, think of those people with an irrational fear of clowns and you’ll understand the effect of the blank page on many of my colleagues.) But in no way is Mr. ChatGPT, at least for now, delivering that je ne sais quoi you get from a first-rate human copywriter who understands exactly how to channel that fierce passion – for saving the rainforest, or feeding the hungry, or rescuing abused animals, or whatever your cause – that stirs donor hearts and moves them to stroke a check.
What could possibly go wrong?
Apart from just not having the chops of a talented human copywriter, ChatGPT, left to its own devices, could be used to manipulate donors in unethical ways. AI is incredibly powerful, and it’s possible that shady organizations – and there are more than a few out there – could use it to dupe vulnerable donors.
An AI-generated letter could, for example, exaggerate the impact of the nonprofit’s programs or otherwise misrepresent its work to gin up more donations. (Yes, an unscrupulous flesh-and-blood writer could do the same, but that’s a dilemma for another column.)
Or the AI-generated letter could contain serious factual errors. It might, for example, refer to the organization’s shelter program as being “fully staffed and well equipped,” when in reality the organization has been struggling with staffing shortages and a lack of resources for years. This inaccuracy, introduced by our not-always-trusty AI “writer,” could seriously undermine the nonprofit’s credibility with donors and erode their trust in the org’s leadership.
ChatGPT – On the Team but Not in Charge
To be clear, my counsel here isn’t for you to swear off AI but rather to proceed with real care. The obstacles noted aren’t unconquerable, but they are serious and shouldn’t be brushed aside. The facts, positioning, and long-term strategic aims – and even just your organization’s “feel” – are too important to be entrusted lock, stock, and barrel to AI.
Better to think of ChatGPT like a rising star in your department: “Eats up work but doesn’t take a lunch. Rarely complains. Full of clever and surprising ideas. Self-confident – sometimes brashly so, even spouting facts out of thin air. Lots of potential, will improve with more direction, experience, and shaping. Keep an eye on this one – gonna be boss one day (though not too soon, I hope).”
I have high hopes for how ChatGPT will help overworked nonprofit executives and staff get more done in coming years – and not just in the communications department but well beyond. AI stands to help orgs better understand their donors. For example, it can analyze vast amounts of data from past fundraising efforts, identifying patterns and insights that would be difficult or impossible to tease out by human analysis alone.
Such findings will help nonprofits tailor their appeals more effectively, engage donors more successfully, and lead to higher response rates and more successful fundraising overall. More money means more programming and more good work generated by the nonprofit.
Another hope is that AI can help nonprofits break down some of the barriers to fundraising that have traditionally existed. For example, many donors are hesitant to give online because they’re worried about the security of their personal information. AI could help provide a more secure and trustworthy donation platform, easing these concerns and encouraging more donors to give online.
I also believe AI has the potential to democratize fundraising by making it easier for smaller nonprofits to compete with larger ones. While it’s true that implementing certain AI technology can be expensive, many platforms offer affordable and/or free tools. With these tools, even the smallest orgs can gain insights into their donor base and create more effective communications and fundraising campaigns.
In short, AI promises great opportunity for us to better understand and engage with donors, break down barriers to fundraising, and create a more equitable playing field that can lead to more good work being done.
I’m Optimistic. No, Worried. No, Both.
But I’m also still wondering – and worrying some – about the ways AI could go wrong. There may be some fearmongering woven into the narrative we’re all hearing about how AI technology can be abused with deepfakes, misuse of algorithms, and other misinformation, but the concerns are real. The AI and tech wizards mentioned earlier aren’t the type to sound unwarranted alarm bells – and I don’t know about you, but when folks with outsized brains like those start talking about “existential threats” from their creations, I’m listening.
One thing is for sure. Whether for good or bad, we don’t have any other option but to engage with AI, and in a big way. There’s no way we’re stuffing this genie back in the bottle.
I recognize that this reflection/ramble may come across as somewhat conflicted. The great American writer Flannery O’Connor used to say she had to write to know what she thinks. Her insight is generally true for me, and clarity typically comes when I put words on the page. But, strangely, even after thinking this through and writing about it, I’m still not certain of my feelings about AI.
At the moment, I’m fascinated, but on the fence – and I get why the folks who built this stuff are wigging out about their creation and concerned about what it will mean to human civilization.
While the stakes in the nonprofit sector are obviously just a small part of that grand conundrum, in my mind, the same perspectives seem to apply. AI is here to stay, so best to wrangle with it, use it ethically as a tool for good, and call out and resist attempts to use it for bad.
A long time ago, before AI was a thing and we were wrestling with other colossal issues, the great Stephen Stills wrote these words: “Rejoice, rejoice, we have no choice, but to carry on.” https://www.youtube.com/watch?v=lh67x9iDCjg
How’s that for an exit song to this reflection? I doubt ChatGPT could have chosen better – though to be fair, it might have curated 10 other equally valid options.
So my parting words of advice: Hang on tight, Godspeed, and embrace this ride – virtual, real, or a mix – that it seems we’re all obliged to take.